Dance Movement
Reimagining Dance: Real-time Music Co-creation between Dancers and AI
Dance performance traditionally follows a unidirectional relationship where movement responds to music. While AI has advanced in various creative domains, its application in dance has primarily focused on generating choreography from musical input. We present a system that enables dancers to dynamically shape musical environments through their movements. Our multi-modal architecture creates a coherent musical composition by intelligently combining pre-recorded musical clips in response to dance movements, establishing a bidirectional creative partnership where dancers function as both performers and composers. Through correlation analysis of performance data, we demonstrate emergent communication patterns between movement qualities and audio features. This approach reconceptualizes the role of AI in performing arts as a responsive collaborator that expands possibilities for both professional dance performance and improvisational artistic expression across broader populations.
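As a rough illustration of the correlation analysis mentioned above, the sketch below computes a Pearson correlation between a movement-quality series and an audio-feature series; the feature definitions and data are hypothetical stand-ins, not the system's actual pipeline.

```python
import numpy as np

# Hypothetical per-frame features logged during a performance:
# movement "energy" (e.g., mean joint speed) and an audio feature
# (e.g., RMS loudness of the clip mix the system selected).
rng = np.random.default_rng(0)
movement_energy = rng.random(600)                         # 600 frames
audio_loudness = 0.7 * movement_energy + 0.3 * rng.random(600)

# Pearson correlation quantifies how strongly the system's musical
# output tracks the dancer's movement quality over the performance.
r = np.corrcoef(movement_energy, audio_loudness)[0, 1]
print(f"movement-audio correlation: r = {r:.2f}")
```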
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.40)
- North America > United States (0.05)
- Europe > Spain (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Media > Music (0.68)
- Leisure & Entertainment (0.68)
Align Your Rhythm: Generating Highly Aligned Dance Poses with Gating-Enhanced Rhythm-Aware Feature Representation
Fan, Congyi, Guan, Jian, Zhao, Xuanjia, Xu, Dongli, Lin, Youtian, Ye, Tong, Feng, Pengming, Pan, Haiwei
Automatically generating natural, diverse, and rhythmic human dance movements driven by music is vital for the virtual reality and film industries. However, generating dance that naturally follows music remains a challenge, as existing methods lack proper beat alignment and exhibit unnatural motion dynamics. In this paper, we propose Danceba, a novel framework that leverages a gating mechanism to enhance rhythm-aware feature representation for music-driven dance generation, achieving highly aligned dance poses with enhanced rhythmic sensitivity. Specifically, we introduce Phase-Based Rhythm Extraction (PRE) to precisely extract rhythmic information from musical phase data, capitalizing on the intrinsic periodicity and temporal structures of music. Additionally, we propose Temporal-Gated Causal Attention (TGCA) to focus on global rhythmic features, ensuring that dance movements closely follow the musical rhythm. We also introduce a Parallel Mamba Motion Modeling (PMMM) architecture to separately model upper- and lower-body motions along with musical features, thereby improving the naturalness and diversity of generated dance movements. Extensive experiments confirm that Danceba outperforms state-of-the-art methods, achieving significantly better rhythmic alignment and motion diversity. Project page: https://danceba.github.io/.
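To make the gating idea concrete, here is a minimal sketch of causal self-attention with a learned per-timestep gate, in the spirit of TGCA; it is a toy module under assumed names and sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GatedCausalAttention(nn.Module):
    """Causal self-attention whose output is modulated by a learned
    sigmoid gate (an illustrative stand-in for Danceba's TGCA)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)  # per-timestep, per-channel gate

    def forward(self, x):                       # x: (batch, time, dim)
        t = x.size(1)
        # Causal mask: each frame attends only to itself and the past.
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        # The gate controls how much attended (rhythmic) context flows through.
        return x + torch.sigmoid(self.gate(x)) * out

x = torch.randn(2, 16, 64)                      # toy motion features
print(GatedCausalAttention(64)(x).shape)        # torch.Size([2, 16, 64])
```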
MusicInfuser: Making Video Diffusion Listen and Dance
Hong, Susung, Kemelmacher-Shlizerman, Ira, Curless, Brian, Seitz, Steven M.
We introduce MusicInfuser, an approach for generating high-quality dance videos that are synchronized to a specified music track. Rather than attempting to design and train a new multimodal audio-video model, we show how existing video diffusion models can be adapted to align with musical inputs by introducing lightweight music-video cross-attention and a low-rank adapter. Unlike prior work requiring motion capture data, our approach fine-tunes only on dance videos. MusicInfuser achieves high-quality music-driven video generation while preserving the flexibility and generative capabilities of the underlying models. We introduce an evaluation framework using Video-LLMs to assess multiple dimensions of dance generation quality. The project page and code are available at https://susunghong.github.io/MusicInfuser.
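The low-rank adapter recipe the abstract mentions follows a general pattern; below is a minimal sketch of a LoRA-style linear layer on top of a frozen pretrained weight. Names, rank, and scaling are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, the generic LoRA formulation."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # keep pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)           # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)           # torch.Size([2, 64])
```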
- North America > United States > New York (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Media > Music (0.88)
- Leisure & Entertainment (0.88)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.88)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
QEAN: Quaternion-Enhanced Attention Network for Visual Dance Generation
Zhou, Zhizhen, Huo, Yejing, Huang, Guoheng, Zeng, An, Chen, Xuhang, Huang, Lian, Li, Zinuo
Music-driven dance generation is a novel and challenging visual generation task: given a piece of music and seed motions, the goal is to generate natural dance movements for the subsequent music. Transformer-based methods face challenges in time-series prediction tasks involving human movement and music because they struggle to capture their nonlinear relationships and temporal aspects. This can lead to issues such as joint deformation, role deviation, floating, and inconsistencies in the dance movements generated in response to the music. In this paper, we propose a Quaternion-Enhanced Attention Network (QEAN) for visual dance synthesis from a quaternion perspective, which consists of a Spin Position Embedding (SPE) module and a Quaternion Rotary Attention (QRA) module. First, SPE embeds position information into self-attention in a rotational manner, leading to better learning of the features of movement and audio sequences and an improved understanding of the connection between music and dance. Second, QRA represents and fuses 3D motion features and audio features as a series of quaternions, enabling the model to better learn the temporal coordination of music and dance under the complex temporal-cycle conditions of dance generation. Finally, we conducted experiments on the AIST++ dataset, and the results show that our approach achieves better and more robust performance in generating accurate, high-quality dance movements. Our source code and dataset are available at https://github.com/MarasyZZ/QEAN and https://google.github.io/aistplusplus_dataset respectively.
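As a point of reference for "embedding position information into self-attention in a rotational manner," the sketch below shows plain 2D rotary position embedding; QEAN's quaternion formulation generalizes this idea, so the code is only an assumed illustration of the underlying mechanism.

```python
import torch

def rotary_embed(x):
    # x: (batch, time, dim), dim even. Each feature pair is rotated by a
    # position-dependent angle, so relative position is encoded in the
    # dot products that self-attention computes.
    b, t, d = x.shape
    pos = torch.arange(t, dtype=torch.float32).unsqueeze(1)        # (t, 1)
    inv_freq = 1.0 / (10000 ** (torch.arange(0, d, 2).float() / d))
    angles = pos * inv_freq                                        # (t, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rot = torch.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
    return rot.flatten(-2)                                         # (b, t, d)

q = torch.randn(2, 16, 64)                       # toy query features
print(rotary_embed(q).shape)                     # torch.Size([2, 16, 64])
```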
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Oceania > Australia > Western Australia (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Research Report > New Finding (0.48)
- Research Report > Promising Solution (0.46)
- Media (0.46)
- Leisure & Entertainment (0.46)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Dance Style Transfer with Cross-modal Transformer
Yin, Wenjie, Yin, Hang, Baraka, Kim, Kragic, Danica, Björkman, Mårten
We present CycleDance, a dance style transfer system that transforms an existing motion clip in one dance style into a motion clip in another dance style while attempting to preserve the motion context of the dance. Our method extends an existing CycleGAN architecture for modeling audio sequences and integrates multimodal transformer encoders to account for music context. We adopt sequence-length-based curriculum learning to stabilize training. Our approach captures rich and long-term intra-relations between motion frames, which is a common challenge in motion transfer and synthesis work. We further introduce new metrics for gauging transfer strength and content preservation in the context of dance movements. We perform an extensive ablation study as well as a human study involving 30 participants with 5 or more years of dance experience. The results demonstrate that CycleDance generates realistic movements in the target style, significantly outperforming the baseline CycleGAN on naturalness, transfer strength, and content preservation.
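The cycle-consistency objective CycleDance builds on can be sketched compactly; the generators below are stand-in linear modules rather than the paper's transformer-based networks, and the loss shown is the generic CycleGAN L1 cycle term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in generators between two dance styles (illustrative only).
G_ab = nn.Linear(64, 64)                   # style A -> style B
G_ba = nn.Linear(64, 64)                   # style B -> style A

motion_a = torch.randn(8, 32, 64)          # (batch, frames, pose dim)

# Cycle consistency: translating A -> B -> A should recover the input,
# which is what lets the model preserve motion content across styles.
cycle_loss = F.l1_loss(G_ba(G_ab(motion_a)), motion_a)
print(cycle_loss.item())
```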
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
- Questionnaire & Opinion Survey (0.94)
- Research Report > New Finding (0.35)
- Leisure & Entertainment (0.69)
- Media > Music (0.46)
Berman
This paper presents an interdisciplinary project that aims to cross-fertilize dance and artificial intelligence. It uses AI as an approach to explore and unveil new territories of possible dance movements. Statistical analyses of recorded human dance movements provide the foundation for a system that learns poses from human dancers, extends them with novel variations, and creates new movement sequences. The system provides dancers with a tool for exploring possible movements and finding inspiration in motion sequences generated automatically in real time in the form of an improvising avatar. Experiences of bringing the avatar into the studio as a virtual dance partner indicate the usefulness of the software as a tool for kinetic exploration. In addition to these artistic results, the experiments also raise questions about how AI relates to artistic agency and creativity in general. Is the improvising avatar truly creative, or is it merely some kind of extension of the dancer or the AI researcher? By analyzing the developed platform as a framework for exploring conceptual movement spaces, and by considering the interaction between dancer, researcher, and software, some possible interpretations of this particular kind of creative process can be offered.
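One simple way to realize the "learn poses, then extend them with novel variations" loop is to fit a low-dimensional pose space and sample around known poses. The sketch below uses synthetic data and PCA, which may well differ from the statistical models the project actually used.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for recorded poses: 500 frames, 17 joints x 3D.
rng = np.random.default_rng(0)
poses = rng.standard_normal((500, 51))

# Learn a low-dimensional pose space from the recordings.
space = PCA(n_components=8).fit(poses)

# Encode one observed pose, perturb it in pose space, and decode the
# perturbations back into full poses as "novel variations".
seed = space.transform(poses[:1])
variations = seed + 0.5 * rng.standard_normal((10, 8))
new_poses = space.inverse_transform(variations)
print(new_poses.shape)                     # (10, 51)
```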
Scientists have developed a new program that can identify people based on how they dance
A team of researchers from the University of Jyväskylä in Finland has developed a new computer system that can identify individuals not through their faces or fingerprints, but simply by watching them dance. For the experiment, the team analyzed the movements of 73 participants as they danced to music in eight different genres, including blues, country, metal, reggae, and rap. They developed a machine learning program that analyzed 21 points of articulation on each dancer's body through a motion capture camera, and combined that data with some general information about each participant. The team found that the program was able to accurately identify which of the 73 participants was dancing, just from their captured movements, 94 percent of the time. 'It seems as though a person's dance movements are a kind of fingerprint,' researcher Dr. Pasi Saari told EurekAlert.
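The identification setup reads as a standard supervised classification problem: motion-derived features per dance trial, labeled by dancer. A minimal sketch on synthetic data follows; the study's actual features and classifier are not specified here, so everything below is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 73 dancers, one trial per genre, 63 features
# (roughly 21 markers x 3D summary statistics per trial).
rng = np.random.default_rng(0)
n_dancers, trials = 73, 8
signatures = rng.standard_normal((n_dancers, 63))     # per-dancer "style"
X = np.repeat(signatures, trials, axis=0)
X += 0.3 * rng.standard_normal(X.shape)               # trial-to-trial noise
y = np.repeat(np.arange(n_dancers), trials)           # dancer identity labels

# Cross-validated identification accuracy on the synthetic data.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=4).mean()
print(f"dancer identification accuracy: {acc:.0%}")
```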
The way you dance is unique, and computers can tell it's you
Studying how people move to music is a powerful tool for researchers looking to understand how and why music affects us the way it does. Over the last few years, researchers at the Centre for Interdisciplinary Music Research at the University of Jyväskylä in Finland have used motion capture technology -- the same kind used in Hollywood -- to learn that your dance moves say a lot about you, such as how extroverted or neurotic you are, what mood you happen to be in, and even how much you empathize with other people. Recently, however, they discovered something that surprised them. "We actually weren't looking for this result, as we set out to study something completely different," explains Dr. Emily Carlson, the first author of the study. "Our original idea was to see if we could use machine learning to identify which genre of music our participants were dancing to, based on their movements."
An Extensive Review of Computational Dance Automation Techniques and Applications
Joshi, Manish, Jadhav, Sangeeta
Dance is an art form, and applying technology to it is a novel endeavor in itself. Researchers have attempted to automate many aspects of dance, from dance notation to choreography, and dance automation has found applications such as e-learning and heritage preservation. Yet despite more than two decades of research on various dance styles around the world, the most recent review portraying the state of research in this area dates to 1990 (Politis, 1990). Hence, we present a comprehensive review article that showcases the many aspects of dance automation. This paper reviews, categorizes, and groups the research work completed so far in the field of automating dance. We explicitly identify six major categories corresponding to the use of computers in dance automation, namely dance representation, dance capture, dance semantics, dance generation, dance processing approaches, and applications of dance automation systems, and classify the research papers under these categories according to their approach and functionality. With the help of the proposed categories and subcategories, one can easily determine the state of research and the avenues left for exploration in the field of dance automation.
- North America > United States > New York > New York County > New York City (0.05)
- North America > United States > District of Columbia > Washington (0.04)
- North America > Canada (0.04)
- (6 more...)
- Media (1.00)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area (0.93)
- Education > Educational Setting > Online (0.34)